
Shared-memory parallelism can be simple, fast, and scalable

By: Shun, Julian.
Material type: Book
Series: ACM Books #15
Publisher: UK: Morgan & Claypool, 2017
Edition: 1st ed.
Description: xv, 426 p. : col. ill. ; 23.5 cm
ISBN: 9781970001884
Subject(s): Algorithms | Hash table | Parallel string algorithms | Analysis tools | Priority updates | Parallel graph algorithms | Linear-work parallel graph connectivity | Parallel wavelet tree construction | Parallel Lempel-Ziv factorization | Parallel computation | Parallel Cartesian tree | Cache-oblivious triangle computations | Ligra | Parallel programming | Ligra++ | Multicore processors | Parallel computers
DDC classification: 005.275
Summary: Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools that enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design so that the solutions developed run efficiently under many different settings. This thesis addresses the challenge with a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era.
Item type: Books | Call number: 005.275 SHU | Status: Available | Barcode: 031615

Includes bibliographical references and index.
"This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award."--Back cover.



